Game theory has been an effective tool for controlling disease spread and for proposing optimal policies at both the individual and regional levels. In this AMS Notices article, we focus on decision making for COVID-19 interventions, aiming to provide mathematical models and efficient machine learning methods, rationales for relevant policies implemented in the past, and an explanation, from a game-theoretic point of view, of how the decisions made by authorities affect their neighboring regions.
We aspire to enable robots that can assemble objects automatically. Structural understanding of object parts plays a crucial role in this task yet remains largely unexplored. In this paper, we focus on the setting of furniture assembly from a set of part geometries, which is essentially a 6-DoF part pose estimation problem. We propose a multi-layer Transformer-based framework that performs geometric and relational reasoning among parts to update their poses iteratively. We carefully design a unique instance encoding to resolve the ambiguity between geometrically similar parts, so that all parts can be distinguished. In addition to assembling from scratch, we extend our framework to a new task called in-process part assembly. Analogous to furniture maintenance, it requires the robot to continue from an unfinished product and assemble the remaining parts into their proper positions. Our method achieves improvements of more than 10% over the current state of the art on multiple metrics on the public PartNet dataset. Extensive experiments and quantitative comparisons demonstrate the effectiveness of the proposed framework.
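The abstract describes iterative pose updates via a multi-layer Transformer with per-part instance encodings. A minimal PyTorch sketch of that general idea follows; the module names, feature dimensions, pose parameterization, and number of refinement iterations are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class IterativePoseRefiner(nn.Module):
    """Toy sketch: refine 6-DoF part poses with a Transformer encoder."""
    def __init__(self, feat_dim=256, num_layers=4, num_iters=3):
        super().__init__()
        self.num_iters = num_iters
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Pose encoded as translation (3) + quaternion (4) = 7 numbers per part (assumed layout).
        self.pose_in = nn.Linear(7, feat_dim)
        self.pose_out = nn.Linear(feat_dim, 7)

    def forward(self, part_feats, instance_enc):
        # part_feats:   (B, P, feat_dim) per-part geometry features
        # instance_enc: (B, P, feat_dim) distinguishes geometrically identical parts
        B, P, _ = part_feats.shape
        pose = part_feats.new_zeros(B, P, 7)
        pose[..., 3] = 1.0  # identity quaternion (w = 1)
        for _ in range(self.num_iters):
            tokens = part_feats + instance_enc + self.pose_in(pose)
            tokens = self.encoder(tokens)        # relational reasoning among parts
            pose = pose + self.pose_out(tokens)  # residual pose update
        return pose

# usage: 2 chairs with 6 parts each
model = IterativePoseRefiner()
feats = torch.randn(2, 6, 256)
inst = torch.randn(2, 6, 256)
print(model(feats, inst).shape)  # torch.Size([2, 6, 7])
```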
Recently, attention-based arbitrary style transfer methods have been proposed to achieve fine-grained results by manipulating the point-wise similarity between content and style features. However, the feature-point-based attention mechanism ignores the multi-manifold distribution of features, where each feature manifold corresponds to a semantic region in the image. Consequently, a uniform content semantic region is rendered with highly different patterns drawn from various style semantic regions, producing inconsistent stylization results with visual artifacts. We propose Progressive Attentional Manifold Alignment (PAMA) to alleviate this problem, which repeatedly applies attention operations and space-aware interpolation. The attention operation rearranges the style features according to the spatial distribution of the content features, making the content and style manifolds correspond on the feature map. Then, the space-aware interpolation adaptively interpolates between the corresponding content and style manifolds to increase their similarity. By progressively aligning the content manifolds to the style manifolds, the proposed PAMA achieves state-of-the-art performance while avoiding inconsistency across semantic regions. Code is available at https://github.com/computer-vision2022/pama.
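As described above, the attention step rearranges style features to follow the content's spatial layout before the manifolds are interpolated. The sketch below illustrates a generic attention-based rearrangement plus a fixed-weight blend under assumed tensor shapes; it is not the released PAMA code, and the real space-aware interpolation predicts the blending weights rather than using a constant.

```python
import torch
import torch.nn.functional as F

def rearrange_style(content, style):
    """content, style: (B, C, H, W) feature maps from a VGG-like encoder.
    Returns style features rearranged to match the content's spatial layout."""
    B, C, H, W = content.shape
    q = F.normalize(content.flatten(2), dim=1)   # (B, C, HW) queries from content
    k = F.normalize(style.flatten(2), dim=1)     # (B, C, HW) keys from style
    v = style.flatten(2)                         # (B, C, HW) values from style
    attn = torch.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (B, HW, HW)
    out = torch.bmm(v, attn.transpose(1, 2))     # (B, C, HW)
    return out.view(B, C, H, W)

def interpolate_manifolds(content, rearranged_style, alpha=0.5):
    """Simplified stand-in for space-aware interpolation: a fixed blend.
    The actual method adapts the weight per location; here it is a constant."""
    return alpha * content + (1 - alpha) * rearranged_style

c = torch.randn(1, 64, 32, 32)
s = torch.randn(1, 64, 32, 32)
print(interpolate_manifolds(c, rearrange_style(c, s)).shape)  # torch.Size([1, 64, 32, 32])
```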
Images captured under weak illumination can suffer severely degraded quality. Addressing the various degradations of low-light images can effectively improve their visual quality and the performance of high-level vision tasks. In this study, we propose a novel Retinex-based real-world network (R2RNet) for low-light image enhancement, which consists of three subnets: Decom-Net, Denoise-Net, and Relight-Net. These three subnets are used for decomposition, denoising, and contrast enhancement with detail preservation, respectively. Our R2RNet uses not only the spatial information of the image to improve contrast but also the frequency information to preserve details; as a result, our model produces more robust results for all kinds of degraded images. Unlike most previous methods trained on synthetic images, we collected the first large-scale real-world paired low-/normal-light image dataset (the LSRW dataset) to satisfy the training requirements and give our model better generalization to real-world scenes. Extensive experiments on publicly available datasets show that our method outperforms existing state-of-the-art methods both quantitatively and visually. In addition, our results show that the performance of high-level vision tasks (i.e., face detection) in low-light conditions can be effectively improved by using the enhanced results produced by our method. Our code and the LSRW dataset are available at: https://github.com/abcdef2000/r2rnet.
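The pipeline above chains three subnets (Decom-Net, Denoise-Net, Relight-Net) in a Retinex-style decomposition. The following sketch shows how such stages could be wired together; the layer choices and channel counts are placeholders rather than the published R2RNet architecture.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class R2RNetSketch(nn.Module):
    """Toy Retinex-style pipeline: decompose -> denoise reflectance -> relight illumination."""
    def __init__(self):
        super().__init__()
        # Decom-Net: predicts reflectance (3 ch) and illumination (1 ch) from the input.
        self.decom = nn.Sequential(conv_block(3, 32), nn.Conv2d(32, 4, 3, padding=1))
        # Denoise-Net: cleans the reflectance component.
        self.denoise = nn.Sequential(conv_block(3, 32), nn.Conv2d(32, 3, 3, padding=1))
        # Relight-Net: boosts the illumination map.
        self.relight = nn.Sequential(conv_block(1, 32), nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, low):
        decom = self.decom(low)
        reflectance = torch.sigmoid(decom[:, :3])
        illumination = torch.sigmoid(decom[:, 3:])
        reflectance = torch.sigmoid(self.denoise(reflectance))
        illumination = torch.sigmoid(self.relight(illumination))
        # Retinex reconstruction: enhanced image = reflectance * adjusted illumination.
        return reflectance * illumination

x = torch.randn(1, 3, 64, 64)   # a low-light RGB image (random values here)
print(R2RNetSketch()(x).shape)  # torch.Size([1, 3, 64, 64])
```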
Document categorization, which aims to assign a topic label to each document, plays a fundamental role in a wide variety of applications. Despite the success of existing studies on conventional supervised document classification, they pay little attention to two real-world problems: (1) the presence of metadata: in many domains, text is accompanied by various additional information such as authors and tags. Such metadata serve as compelling topic indicators and should be leveraged in the categorization framework; (2) label scarcity: in some cases labeled training samples are expensive to obtain, and categorization must be performed using only a small set of annotated data. In recognition of these two challenges, we propose MetaCat, a minimally supervised framework for categorizing text with metadata. Specifically, we develop a generative process describing the relationships among words, documents, labels, and metadata. Guided by the generative model, we embed text and metadata into the same semantic space to encode heterogeneous signals. Then, based on the same generative process, we synthesize training samples to address the bottleneck of label scarcity. We conduct a thorough evaluation on a wide range of datasets. Experimental results demonstrate the effectiveness of MetaCat over many competitive baselines.
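The core idea is to embed words, metadata, and labels in one shared semantic space and score documents against labels by similarity. The toy sketch below illustrates only the shape of that computation, with random embeddings and an invented vocabulary; it is not the MetaCat training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64
# One shared embedding table for heterogeneous signals: words, authors (metadata), labels.
vocab = {"neural": 0, "network": 1, "league": 2, "match": 3,
         "author:alice": 4, "author:bob": 5,
         "label:machine_learning": 6, "label:sports": 7}
E = rng.normal(size=(len(vocab), dim))
E /= np.linalg.norm(E, axis=1, keepdims=True)

def doc_vector(tokens):
    # A document is represented as the mean of its word and metadata embeddings.
    return E[[vocab[t] for t in tokens]].mean(axis=0)

def classify(tokens, labels=("label:machine_learning", "label:sports")):
    d = doc_vector(tokens)
    scores = {lb: float(E[vocab[lb]] @ d) for lb in labels}
    return max(scores, key=scores.get), scores

print(classify(["neural", "network", "author:alice"]))
```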
GitHub has become an important platform for code sharing and scientific exchange. With the massive number of repositories available, there is a pressing need for topic-based search. Even though a topic label functionality has been introduced, the majority of GitHub repositories do not have any labels, impeding search and topic-based analysis. This work frames the automatic repository classification problem as keyword-driven hierarchical classification. Specifically, users only need to provide a label hierarchy with keywords as supervision. This setting is flexible, adaptive to users' needs, accommodates different granularities of topic labels, and requires minimal human effort. We identify three key challenges of this problem: (1) the presence of multi-modal signals; (2) supervision scarcity and bias; and (3) supervision format mismatch. In recognition of these challenges, we propose the HiGitClass framework, comprising three modules: heterogeneous information network embedding; keyword enrichment; and topic modeling with pseudo-document generation. Experimental results on two GitHub repository collections confirm that HiGitClass outperforms existing weakly-supervised and dataless hierarchical classification methods, especially in its ability to integrate both structured and unstructured data for repository classification.
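One of the modules named above is pseudo-document generation from user-provided keywords. A toy sketch of that idea appears below, sampling bag-of-words pseudo-documents biased toward each class's keywords; the label hierarchy and keywords are invented, and the real HiGitClass module is driven by a topic model rather than uniform sampling.

```python
import random

random.seed(0)

# User supplies a label hierarchy with a few keywords per leaf class (illustrative only).
label_keywords = {
    "machine-learning/computer-vision": ["image", "detection", "segmentation"],
    "machine-learning/nlp": ["text", "language", "parsing"],
    "systems/databases": ["sql", "index", "transaction"],
}
background = ["code", "library", "python", "build", "benchmark"]

def generate_pseudo_doc(label, length=20, keyword_prob=0.6):
    """Sample a bag-of-words pseudo-document biased toward the class keywords."""
    words = []
    for _ in range(length):
        pool = label_keywords[label] if random.random() < keyword_prob else background
        words.append(random.choice(pool))
    return words

for lbl in label_keywords:
    print(lbl, generate_pseudo_doc(lbl, length=8))
```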
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; and so on. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
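The paper's headline finding is that distilling token relations outperforms CLS-token- and feature-based distillation. A minimal sketch of a token-relation distillation loss is given below; the teacher and student features are random tensors, and the exact relation definition and temperature used in TinyMIM may differ.

```python
import torch
import torch.nn.functional as F

def relation_distill_loss(student_tokens, teacher_tokens, tau=1.0):
    """student_tokens, teacher_tokens: (B, N, D) token features from one block.
    Distill the token-to-token similarity structure instead of the features themselves."""
    def relation(x):
        x = F.normalize(x, dim=-1)
        return (x @ x.transpose(1, 2)) / tau  # (B, N, N) affinity logits

    s = F.log_softmax(relation(student_tokens), dim=-1)
    t = F.softmax(relation(teacher_tokens), dim=-1)
    return F.kl_div(s, t, reduction="batchmean")

student = torch.randn(2, 196, 192)  # e.g., ViT-Tiny width
teacher = torch.randn(2, 196, 192)  # teacher features projected to the student's width
print(relation_distill_loss(student, teacher))
```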
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are two-fold: First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
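The feature-level step described above pools dynamic class centers from support masks and uses them to re-weight query features. The simplified sketch below illustrates that step only; the shapes and the sigmoid channel gating are assumptions, and the instance-level cross-attention between object queries is omitted.

```python
import torch

def masked_class_centers(support_feats, support_masks, eps=1e-6):
    """support_feats: (K, C, H, W); support_masks: (K, 1, H, W) binary masks.
    Returns one class-center vector per support image via masked average pooling."""
    masked = support_feats * support_masks
    centers = masked.sum(dim=(2, 3)) / (support_masks.sum(dim=(2, 3)) + eps)  # (K, C)
    return centers

def reweight_query(query_feats, centers):
    """query_feats: (B, C, H, W). Re-weight channels by the mean class center."""
    center = centers.mean(dim=0)                    # (C,)
    gate = torch.sigmoid(center).view(1, -1, 1, 1)  # channel-wise gate
    return query_feats * gate

support = torch.randn(3, 256, 32, 32)  # K = 3 support shots
masks = (torch.rand(3, 1, 32, 32) > 0.5).float()
query = torch.randn(2, 256, 32, 32)
print(reweight_query(query, masked_class_centers(support, masks)).shape)
```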
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if it: (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of accuracy and F-measure, respectively.
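The decision rule sketched in the abstract is: flag a patch as overfitting if it violates the correct program's specifications or preserves the buggy program's erroneous behaviors, and otherwise fall back to a syntax-based classifier. A schematic version of that rule over invariant sets is shown below; invariants are plain strings here and `syntax_model` is a hypothetical stand-in for the learned component.

```python
def is_overfitting(patched_invs, correct_invs, buggy_error_invs, syntax_model=None):
    """patched_invs:    invariants inferred on the APR-patched program
    correct_invs:       invariants of the developer-patched (correct) program
    buggy_error_invs:   invariants capturing the erroneous behavior of the buggy program
    syntax_model:       optional fallback classifier over program syntax (hypothetical)."""
    # Rule 1: the patch violates specifications of the correct program.
    if not correct_invs.issubset(patched_invs):
        return True
    # Rule 2: the patch still maintains erroneous behaviors of the buggy program.
    if buggy_error_invs & patched_invs:
        return True
    # Otherwise, defer to the syntax-based model (e.g., trained on labeled patches).
    if syntax_model is not None:
        return syntax_model(patched_invs)
    return False

print(is_overfitting({"x >= 0", "y != null"}, {"x >= 0"}, {"y == null"}))  # False
print(is_overfitting({"x >= 0", "y == null"}, {"x >= 0"}, {"y == null"}))  # True
```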
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, and cardinality. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
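Muse decodes by repeatedly predicting masked discrete image tokens in parallel and keeping the most confident predictions each round. The toy loop below illustrates that style of decoding; the random `fake_model` stands in for the real text-conditioned Transformer, and the cosine unmasking schedule is an assumption.

```python
import torch

def parallel_decode(model, num_tokens=256, vocab_size=8192, steps=8, mask_id=-1):
    """Iteratively fill in masked tokens, keeping the most confident predictions each step."""
    tokens = torch.full((num_tokens,), mask_id, dtype=torch.long)
    for step in range(steps):
        logits = model(tokens)                  # (num_tokens, vocab_size)
        probs = torch.softmax(logits, dim=-1)
        conf, pred = probs.max(dim=-1)
        # Already-decided positions get infinite confidence so they are never re-masked.
        conf = torch.where(tokens == mask_id, conf, torch.full_like(conf, float("inf")))
        # Cosine schedule: fraction of tokens still masked after this step.
        keep_masked = int(num_tokens * torch.cos(torch.tensor((step + 1) / steps * 3.14159 / 2)))
        order = conf.argsort()                  # least confident first
        tokens = torch.where(tokens == mask_id, pred, tokens)
        tokens[order[:keep_masked]] = mask_id   # re-mask the least confident positions
    return tokens

fake_model = lambda t: torch.randn(t.shape[0], 8192)  # stand-in for the real model
print(parallel_decode(fake_model)[:10])
```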